
    Current Trends in Digital Twin Development, Maintenance, and Operation: An Interview Study

    Digital twins (DT) are often defined as a pairing of a physical entity and a corresponding virtual entity that mimics certain aspects of the former, depending on the use case. In recent years, this concept has facilitated numerous use cases ranging from design to validation and predictive maintenance of large and small high-tech systems. Although growing in popularity in both industry and academia, digital twins and the methodologies for developing and maintaining them differ vastly. To better understand these differences and similarities, we performed a semi-structured interview study with 19 professionals from industry and academia who are closely associated with different lifecycle stages of digital twins. In this paper, we present our analysis and findings from this study, which is structured around eight research questions (RQs). We report our findings per research question. Overall, we identified a lack of uniformity in the understanding of digital twins and in the tools, techniques, and methodologies used for their development and maintenance. Furthermore, considering that digital twins are software-intensive systems, we see significant potential for adopting more software engineering practices, processes, and expertise in the various stages of a digital twin's lifecycle.

    Model as a Service: Towards a Discovery Platform for Internet of Food

    The Internet of Food (INoF) consortium, part of the Sustainable Food Initiative (SFI), aims to address future food-safety challenges with engineering solutions that make the production process more efficient and sustainable. Inter-organizational collaboration can stimulate fast innovation and sustainable research processes by significantly reducing data loss and miscommunication. Such collaboration requires an appropriate digital infrastructure that can maintain interoperability among diverse data formats from different sources. This infrastructure should also facilitate sharing of data and services without companies having to share intellectual property (IP) or replicate the corresponding execution environments. As part of INoF, this project aims to develop a prototype for such an infrastructure and set up a baseline for building an effective model discovery platform. In this context, models are computational units that can provide insights into food products. With access to results from more models, companies can make better decisions and speed up product development. During this project, a microservice-based architecture was designed and a prototype was developed that exploits the idea of Model as a Service (MaaS). It offers models in the form of web services, allowing organizations other than the owner of a model to use it. To achieve interoperability among the different data sources in this project, functionalities such as dynamic model-parameter mapping and on-demand unit conversion were implemented in the prototype. After execution, results from several models belonging to different organizations can be viewed through the platform. One of the major goals of this project was to demonstrate the benefits and possibilities of sharing model results to attract further collaboration. Therefore, several INoF partners were closely involved in this project. The MaaS prototype was demonstrated to all INoF partners and was well received.
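    The dynamic parameter mapping and on-demand unit conversion mentioned in the abstract can be sketched in a few lines of Python. This is a hedged illustration only: the model (`shelf_life_model`), the conversion table, the parameter map, and the `invoke` helper are all hypothetical stand-ins, not the actual MaaS API.

```python
# Hypothetical sketch of invoking a "model as a service" with
# parameter-name mapping and unit conversion before execution.

def shelf_life_model(temperature_c: float) -> float:
    """Toy model: shelf life (days) halves per 10 degree C rise above 4 C."""
    return 14.0 * 2 ** ((4.0 - temperature_c) / 10.0)

# On-demand unit converters, applied before the model sees the input.
CONVERTERS = {
    ("fahrenheit", "celsius"): lambda f: (f - 32.0) * 5.0 / 9.0,
    ("celsius", "celsius"): lambda c: c,
}

# Dynamic mapping from a caller's parameter names to the model's names.
PARAM_MAP = {"temp": "temperature_c"}

def invoke(model, payload: dict, unit: str) -> float:
    """Rename the caller's parameters and convert units, then run the model."""
    convert = CONVERTERS[(unit, "celsius")]
    kwargs = {PARAM_MAP[k]: convert(v) for k, v in payload.items()}
    return model(**kwargs)

# A caller using Fahrenheit and its own parameter name still gets a result:
# 39.2 F converts to 4.0 C, so the toy model returns its baseline of 14 days.
result = invoke(shelf_life_model, {"temp": 39.2}, unit="fahrenheit")
```

    In the actual platform such an `invoke` step would sit behind a web-service endpoint, so the model owner's code and execution environment never leave their organization.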

    Models Meet Data: Challenges to Create Virtual Entities for Digital Twins

    In recent years, digital twin (DT) technology has moved to the center of attention of many researchers and engineers. Commonly, a digital twin is defined in terms of a virtual entity (VE) that exhibits behavior similar to its physical counterpart and that is coupled to this physical entity (PE). The VE thus forms a core part of any digital twin. While VEs may differ vastly, ranging from simple simulations to high-fidelity virtual mirrors of the corresponding PE, they are typically composed of multiple models that may originate from multiple domains, address different aspects, and be expressed and processed using different tools and languages. Furthermore, the use of time-series data from the PE, whether historical, real-time, or both, distinguishes VEs from mere simulations. As a consequence of the complexity of this modeling landscape and the data aspect of VEs, the design of a digital twin, and specifically of the VE within it, presents several challenges. In this paper, we present our vision for the development, evolution, maintenance, and verification of such virtual entities for digital twins.
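    The two distinguishing properties named in the abstract, a VE composed of models from multiple domains and driven by time-series data from the PE, can be sketched minimally in Python. All names and the toy models below are hypothetical illustrations, not the paper's actual formalism.

```python
# Hypothetical sketch: a virtual entity composing two domain models,
# both driven by the same time series measured on the physical entity.

# Assumed time series of load measurements from the physical entity.
load_series = [0.2, 0.5, 0.9, 0.7]

def thermal_model(load: float) -> float:
    """Toy thermal-domain model: temperature rises linearly with load."""
    return 20.0 + 15.0 * load

def wear_model(load: float) -> float:
    """Toy mechanical-domain model: incremental wear grows with load squared."""
    return 0.01 * load ** 2

class VirtualEntity:
    """Composes heterogeneous models and replays PE data through each one."""

    def __init__(self, models):
        self.models = models  # name -> callable, one per domain

    def replay(self, series):
        """Feed the PE's time series to every model; collect per-domain traces."""
        return {name: [m(x) for x in series] for name, m in self.models.items()}

ve = VirtualEntity({"thermal": thermal_model, "wear": wear_model})
states = ve.replay(load_series)
```

    The point of the sketch is the coupling: the same measured series feeds every model, which is what separates a VE from a stand-alone simulation that generates its own inputs.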

    LaMa: a thematic labelling web application

    Qualitative analysis of data is relevant for a variety of domains, including empirical research studies and the social sciences. While performing qualitative analysis of large textual data sets, such as data from interviews, surveys, mailing lists, and code repositories, condensing pieces of data into a set of terms or keywords simplifies the analysis and helps in obtaining useful insights. This condensation is achieved by associating keywords, a.k.a. labels, with text fragments, a.k.a. artifacts. During this type of research, it is essential to achieve high accuracy, facilitate collaboration, build consensus, and limit bias. LaMa, short for Labelling Machine, is an open-source web application developed to aid thematic analysis of qualitative data. The source code and documentation of the tool are available at https://github.com/muctadir/lama. In addition to being open source, LaMa facilitates thematic analysis through features such as artifact-based collaborative labelling, consensus building through conflict-resolution techniques, grouping of labels into themes, and private installation with complete control over research data. With the help of this tool and the flow it enforces, thematic analysis becomes less time-consuming and more structured.
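    The core data model behind collaborative labelling, labels attached to artifacts by several labellers, conflicts where they disagree, and labels rolled up into themes, can be sketched briefly in Python. The data, names, and helper functions below are hypothetical illustrations, not LaMa's actual schema or code.

```python
from collections import Counter

# Hypothetical sketch: two labellers assign labels to text artifacts.
labels_by_artifact = {
    "interview-01 fragment 3": {"alice": "tooling", "bob": "tooling"},
    "interview-02 fragment 1": {"alice": "process", "bob": "maintenance"},
}

def conflicts(assignments: dict) -> list:
    """Artifacts where labellers assigned different labels (need resolution)."""
    return [a for a, by in assignments.items() if len(set(by.values())) > 1]

def themes(assignments: dict, grouping: dict) -> Counter:
    """Count label occurrences rolled up into their themes."""
    counts = Counter()
    for by in assignments.values():
        for label in by.values():
            counts[grouping.get(label, "ungrouped")] += 1
    return counts

# Labels grouped into themes by the researcher.
grouping = {
    "tooling": "technology",
    "process": "methodology",
    "maintenance": "methodology",
}
```

    A conflict list like this is what drives consensus building: disagreeing labellers revisit the flagged artifacts until the labels converge, after which the theme counts summarize the data set.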